
    Protecting biodiversity and economic returns in resource-rich tropical forests.

    In pursuit of socioeconomic development, many countries are expanding oil and mineral extraction into tropical forests. These activities seed access to remote, biologically rich areas, thereby endangering global biodiversity. Here we demonstrate that conservation solutions that effectively balance the protection of biodiversity and economic revenues are possible in biologically valuable regions. Using spatial data on oil profits and predicted species and ecosystem extents, we optimise the protection of 741 terrestrial species and 20 ecosystems of the Ecuadorian Amazon across a range of opportunity costs (i.e. sacrifices of extractive profit). For such an optimisation, giving up 5% of a year's oil profits (US$221 million) allows for a protected area network that retains an average of 65% of the extent of each species/ecosystem. This performance far exceeds that of the network produced by simple land-area optimisation, which requires a sacrifice of approximately 40% of annual oil profits (US$1.7 billion), and uses only marginally less land, to achieve equivalent levels of ecological protection. Applying spatial statistics to remotely sensed, historic deforestation data, we further focus the optimisation on areas most threatened by imminent forest loss. We identify Emergency Conservation Targets: areas that are essential to a cost-effective conservation reserve network and at imminent risk of destruction, thus requiring urgent and effective protection. Governments should employ the methods presented here when considering extraction-led development options, to responsibly manage the associated ecological-economic trade-offs and protect natural capital. Article Impact Statement: Governments controlling resource extraction from tropical forests can arrange production and conservation to retain biodiversity and profits.
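
    A minimal sketch of the kind of trade-off analysis the abstract describes, written as a greedy budgeted selection of land parcels: protect the parcels that add the most species extent per unit of forgone oil profit until 5% of annual profit has been given up. The parcel data, coverage values and the greedy rule are illustrative assumptions, not the authors' actual spatial optimisation.

```python
# Greedy sketch: choose land parcels to protect so that average retained extent
# across species is maximised while forgone oil profit stays within a budget.
# All inputs below are invented for illustration.
import random

random.seed(0)
N_PARCELS, N_SPECIES = 200, 30

# Hypothetical inputs: oil profit forgone if a parcel is protected, and the
# share of each species' range contained in that parcel.
profit = [random.uniform(0.1, 5.0) for _ in range(N_PARCELS)]
extent = [[random.uniform(0.0, 0.02) for _ in range(N_SPECIES)] for _ in range(N_PARCELS)]

budget = 0.05 * sum(profit)          # give up at most 5% of total oil profit
protected, spent = set(), 0.0
covered = [0.0] * N_SPECIES          # retained extent per species so far

def gain(p):
    """Marginal increase in summed retained extent if parcel p is protected."""
    return sum(min(1.0, covered[s] + extent[p][s]) - covered[s] for s in range(N_SPECIES))

while True:
    best, best_ratio = None, 0.0
    for p in range(N_PARCELS):
        if p in protected or spent + profit[p] > budget:
            continue
        ratio = gain(p) / profit[p]   # benefit per unit of forgone profit
        if ratio > best_ratio:
            best, best_ratio = p, ratio
    if best is None:
        break
    protected.add(best)
    spent += profit[best]
    for s in range(N_SPECIES):
        covered[s] = min(1.0, covered[s] + extent[best][s])

print(f"parcels protected: {len(protected)}, profit forgone: {spent:.2f}")
print(f"mean retained extent: {sum(covered) / N_SPECIES:.2%}")
```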

    A network-based dynamical ranking system for competitive sports

    From the viewpoint of networks, a ranking system for players or teams in sports is equivalent to a centrality measure for sports networks, whereby a directed link represents the result of a single game. Previously proposed network-based ranking systems are derived from static networks, i.e., aggregations of the results of games over time. However, the score of a player (or team) fluctuates over time. Defeating a renowned player at the peak of their performance is intuitively more rewarding than defeating the same player in other periods. To account for this factor, we propose a dynamic variant of such a network-based ranking system and apply it to professional men's tennis data. We derive a set of linear online update equations for the score of each player. The proposed ranking system predicts the outcomes of future games with higher accuracy than its static counterparts.
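
    A hedged sketch of what a dynamic, network-style rating update can look like: scores decay between games, and a win is credited in proportion to the opponent's current (decayed) score, so defeating a player near their peak is worth more. The decay rate and transfer weights below are illustrative assumptions, not the update equations derived in the paper.

```python
# Toy dynamic rating: exponential decay of scores plus credit proportional to
# the opponent's current score at the time of the game. Parameters are assumed.
import math
from collections import defaultdict

DECAY = 0.05      # per-day exponential decay of scores (assumed)
BASE = 1.0        # flat credit for any win (assumed)
TRANSFER = 0.3    # share of the loser's current score credited to the winner (assumed)

scores = defaultdict(float)
last_time = defaultdict(float)

def decayed(player, t):
    """Score of `player` at time t after decay since their last recorded game."""
    return scores[player] * math.exp(-DECAY * (t - last_time[player]))

def record_game(winner, loser, t):
    """Online update after a single game played at time t (in days)."""
    s_w, s_l = decayed(winner, t), decayed(loser, t)
    scores[winner] = s_w + BASE + TRANSFER * s_l
    scores[loser] = s_l
    last_time[winner] = last_time[loser] = t

# Toy usage: B beats A near A's peak, C beats A long after A's score has decayed.
record_game("A", "B", t=0)
record_game("B", "A", t=1)      # larger reward: A's score is still high
record_game("C", "A", t=200)    # smaller reward: A's score has decayed
print({p: round(decayed(p, 200), 3) for p in scores})
```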

    Developing Cost-Effective Inspection Sampling Plans for Energy-Efficiency Programs at Southern California Edison


    The impact of partially missing communities on the reliability of centrality measures

    Network data is usually not error-free, and the absence of some nodes is a very common type of measurement error. Studies have shown that the reliability of centrality measures is severely affected by missing nodes. This paper investigates the reliability of centrality measures when missing nodes are likely to belong to the same community. We study the behavior of five commonly used centrality measures in uniform and scale-free networks in various error scenarios. We find that centrality measures are generally more reliable when missing nodes are likely to belong to the same community than when nodes are missing uniformly at random. In scale-free networks, however, betweenness centrality becomes less reliable when missing nodes are more likely to belong to the same community. Moreover, centrality measures in scale-free networks are more reliable in networks with stronger community structure; we do not observe this effect for uniform networks. Our observations suggest that the impact of missing nodes on the reliability of centrality measures might not be as severe as the literature suggests.
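
    The error-scenario experiment described above can be mimicked in a few lines: remove a fixed number of nodes either uniformly at random or concentrated in one planted community, and compare centrality ranks on the damaged graph with those on the full graph. The stochastic block model, the sample sizes and the use of betweenness centrality are assumptions made only for this illustration.

```python
# Compare centrality reliability under uniform vs. community-concentrated missingness.
import random
import networkx as nx
from scipy.stats import spearmanr

random.seed(1)
# Two planted communities of 100 nodes each; nodes 0-99 form the first block.
G = nx.stochastic_block_model([100, 100], [[0.10, 0.01], [0.01, 0.10]], seed=1)
true_bc = nx.betweenness_centrality(G)

def rank_agreement(missing_nodes):
    """Spearman correlation of betweenness ranks between the full and damaged graph."""
    H = G.copy()
    H.remove_nodes_from(missing_nodes)
    observed = nx.betweenness_centrality(H)
    common = list(observed)
    rho, _ = spearmanr([true_bc[n] for n in common], [observed[n] for n in common])
    return rho

n_missing = 30
uniform_missing = random.sample(list(G.nodes), n_missing)
community_missing = random.sample(range(100), n_missing)   # all from one community

print("uniform missingness:   rho =", round(rank_agreement(uniform_missing), 3))
print("community missingness: rho =", round(rank_agreement(community_missing), 3))
```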

    Assortment optimisation under a general discrete choice model: A tight analysis of revenue-ordered assortments

    The assortment problem in revenue management is the problem of deciding which subset of products to offer to consumers in order to maximise revenue. A simple and natural strategy is to select the best assortment out of all those constructed by fixing a threshold revenue π and then choosing all products with revenue at least π. This is known as the revenue-ordered assortments strategy. In this paper we study the approximation guarantees provided by revenue-ordered assortments when customers are rational in the following sense: the probability of selecting a specific product from the set being offered cannot increase if the set is enlarged. This rationality assumption, known as regularity, is satisfied by almost all discrete choice models considered in the revenue management and choice theory literature, and in particular by random utility models. The bounds we obtain are tight and improve on recent results in that direction, such as those for the Mixed Multinomial Logit model by Rusmevichientong et al. (2014). An appealing feature of our analysis is its simplicity, as it relies only on the regularity condition. We also draw a connection between assortment optimisation and two pricing problems called unit-demand envy-free pricing and Stackelberg minimum spanning tree: these problems can be restated as assortment problems under discrete choice models satisfying the regularity condition, and revenue-ordered assortments then correspond to the well-studied uniform pricing heuristic. When specialised to that setting, the general bounds we establish for revenue-ordered assortments match and unify the best known results on uniform pricing.
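
    A small, self-contained sketch of the revenue-ordered assortments strategy: every candidate assortment is "all products with revenue at least π", and the best candidate is compared against the brute-force optimum over all subsets. The multinomial logit model below is just one regular choice model used for illustration, and the product data are invented.

```python
# Revenue-ordered assortments vs. brute-force optimum under a toy MNL model.
from itertools import combinations

# Hypothetical products: (revenue, MNL preference weight); no-purchase weight is 1.
products = [(10.0, 1.2), (8.0, 2.0), (6.0, 3.5), (4.0, 5.0), (2.0, 8.0)]

def expected_revenue(assortment):
    """Expected revenue under a multinomial logit model with no-purchase weight 1."""
    denom = 1.0 + sum(w for _, w in assortment)
    return sum(r * w for r, w in assortment) / denom

# One revenue-ordered candidate per threshold revenue pi.
candidates = [[p for p in products if p[0] >= pi] for pi, _ in products]
best_ro = max(candidates, key=expected_revenue)

# Brute-force optimum over all subsets (feasible only for tiny instances).
best_opt = max(
    (list(c) for k in range(1, len(products) + 1) for c in combinations(products, k)),
    key=expected_revenue,
)

print("revenue-ordered best:", round(expected_revenue(best_ro), 3))
print("true optimum       :", round(expected_revenue(best_opt), 3))
```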

    Representing complex data using localized principal components with application to astronomical data

    Often the relation between the variables constituting a multivariate data space might be characterized by one or more of the terms "nonlinear", "branched", "disconnected", "bended", "curved", "heterogeneous", or, more generally, "complex". In these cases, simple principal component analysis (PCA) as a tool for dimension reduction can fail badly. Of the many alternative approaches proposed so far, local approximations of PCA are among the most promising. This paper will give a short review of localized versions of PCA, focusing on local principal curves and local partitioning algorithms. Furthermore, we discuss projections other than the local principal components. When performing local dimension reduction for regression or classification problems, it is important to focus not only on the manifold structure of the covariates but also on the response variable(s). Local principal components only achieve the former, whereas localized regression approaches concentrate on the latter. Local projection directions derived from the partial least squares (PLS) algorithm offer an interesting trade-off between these two objectives. We apply these methods to several real data sets. In particular, we consider simulated astrophysical data from the future Galactic survey mission Gaia. Published in "Principal Manifolds for Data Visualization and Dimension Reduction", A. Gorban, B. Kegl, D. Wunsch, and A. Zinovyev (eds), Lecture Notes in Computational Science and Engineering, Springer, 2007, pp. 180-204.
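
    One simple member of the family of localized PCA methods surveyed above can be sketched as "partition, then project": cluster the data and fit a separate one-component PCA in each partition, so curved or branched structure is approximated piecewise. The spiral toy data, the use of k-means and the number of clusters are assumptions for illustration only.

```python
# Global PCA vs. a localized (partition-wise) PCA on curved toy data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
t = rng.uniform(0, 3 * np.pi, 500)
X = np.c_[t * np.cos(t), t * np.sin(t)] + rng.normal(scale=0.2, size=(500, 2))

# Global PCA: a single direction cannot follow the spiral.
global_var = PCA(n_components=1).fit(X).explained_variance_ratio_[0]

# Localized PCA: one leading direction per local partition.
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)
local_var = [
    PCA(n_components=1).fit(X[labels == c]).explained_variance_ratio_[0]
    for c in range(8)
]

print(f"global PCA explains {global_var:.1%} of variance with one component")
print(f"local PCA explains  {np.mean(local_var):.1%} on average within partitions")
```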

    Aggregating Centrality Rankings: A Novel Approach to Detect Critical Infrastructure Vulnerabilities

    Assessing critical infrastructure vulnerabilities is paramount to arranging efficient plans for their protection. Critical infrastructures are network-based systems; hence, they are composed of nodes and edges. The literature shows that node criticality, which is the focus of this paper, can be addressed from different metric-based perspectives (e.g., degree, maximal flow, shortest path). However, each metric provides a specific insight while neglecting others. This paper attempts to overcome this pitfall through a methodology based on ranking aggregation. Specifically, we consider several numerical topological descriptors of the nodes’ importance (e.g., degree, betweenness, closeness) and convert these descriptors into ratio matrices; we then extend the Analytic Hierarchy Process problem to the case of multiple ratio matrices and resort to a Logarithmic Least Squares formulation to identify an aggregated metric that represents a good trade-off among the different topological descriptors. The procedure is validated using the Central London Tube network as a case study.
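
    A hedged sketch of the aggregation step described above: each centrality vector is turned into a pairwise ratio matrix a_ij = c_i / c_j, and the Logarithmic Least Squares solution across all matrices (which reduces to a geometric mean) yields the aggregated score. The example network and the particular set of descriptors are assumptions; the paper's case study uses the Central London Tube network.

```python
# Aggregate several centrality rankings via ratio matrices and log least squares.
import numpy as np
import networkx as nx

G = nx.les_miserables_graph()   # any connected example network would do
nodes = list(G)

# Several topological descriptors of node importance (epsilon avoids log(0)).
metrics = [
    nx.degree_centrality(G),
    nx.betweenness_centrality(G),
    nx.closeness_centrality(G),
]
C = np.array([[m[n] + 1e-9 for n in nodes] for m in metrics])   # shape (K, N)

# For each metric k, the ratio matrix has entries a_ij = c_i / c_j. The log
# least squares aggregate is w_i proportional to exp(mean over k and j of
# ln a_ij), i.e. a geometric mean of the normalised centralities.
log_ratios = np.log(C)[:, :, None] - np.log(C)[:, None, :]       # shape (K, N, N)
w = np.exp(log_ratios.mean(axis=(0, 2)))
w /= w.sum()

top = sorted(zip(nodes, w), key=lambda kv: -kv[1])[:5]
print("top 5 nodes by aggregated score:", [(n, round(s, 4)) for n, s in top])
```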

    Comparing Static and Dynamic Weighted Software Coupling Metrics

    Coupling metrics that count the number of inter-module connections in a software system are an established way to measure internal software quality with respect to modularity. In addition to static metrics, which are obtained from the source or compiled code of a program, dynamic metrics use runtime data gathered, e.g., by monitoring a system in production. Dynamic metrics have been used to improve the accuracy of static metrics for object-oriented software. We study weighted dynamic coupling, which takes into account how often a connection (e.g., a method call) is executed during a system’s run, and investigate the correlation between dynamic weighted metrics and their static counterparts. To compare the different metrics, we use data collected from four different experiments, each monitoring production use of a commercial software system over a period of four weeks. We observe an unexpected level of correlation between the static and the weighted dynamic metrics, as well as revealing differences between class-level and package-level analyses.
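
    A toy sketch of the two quantities being correlated: a static-style count of distinct inter-module dependencies versus a dynamic weight that also counts how often each call is executed, followed by a rank correlation. The call trace is invented, and for brevity both metrics are derived from the same trace, whereas in the study the static metrics come from the code itself.

```python
# Static vs. weighted dynamic coupling on an invented inter-module call trace.
from collections import Counter, defaultdict
from scipy.stats import spearmanr

# Hypothetical runtime trace of (caller_module, callee_module), one entry per call.
trace = (
    [("ui", "core")] * 120 + [("ui", "db")] * 3 + [("ui", "log")] * 10
    + [("core", "db")] * 500 + [("core", "log")] * 40 + [("batch", "db")] * 2
)

static_deps = defaultdict(set)   # distinct outgoing inter-module dependencies
dynamic_calls = Counter()        # how often each module made an inter-module call
for caller, callee in trace:
    static_deps[caller].add(callee)
    dynamic_calls[caller] += 1

modules = sorted(static_deps)
static_coupling = [len(static_deps[m]) for m in modules]
dynamic_coupling = [dynamic_calls[m] for m in modules]

for m, s, d in zip(modules, static_coupling, dynamic_coupling):
    print(f"{m:6s} static={s}  weighted dynamic={d}")
rho, _ = spearmanr(static_coupling, dynamic_coupling)
print("Spearman correlation between the two metrics:", round(rho, 3))
```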

    Forecasting daily attendances at an emergency department to aid resource planning

    Background: Accurate forecasting of emergency department (ED) attendances can be a valuable tool for micro and macro level planning. Methods: The data analysed were the counts of daily patient attendances at the ED of an acute care regional general hospital from July 2005 to March 2008. Patients were stratified into three acuity categories, i.e. P1, P2 and P3, with P1 being the most acute and P3 the least acute. The autoregressive integrated moving average (ARIMA) method was applied separately to each of the three acuity categories and to total patient attendances. Independent variables included in the model were public holiday (yes or no), ambient air quality measured by the pollution standard index (PSI), daily average ambient temperature and daily relative humidity. The seasonal components of weekly and yearly periodicities in the time series of daily attendances were also studied. Univariate analysis by t-tests and multivariate time series analysis were carried out in SPSS version 15. Results: By time series analysis, P1 attendances did not show any weekly or yearly periodicity and were predicted only by ambient air quality of PSI > 50. P2 and total attendances showed weekly periodicities and were also significantly predicted by public holiday. P3 attendances were significantly correlated with day of the week, month of the year, public holiday, and ambient air quality of PSI > 50. After applying the developed models to validate the forecast, the mean absolute percentage error (MAPE) of prediction by the models was 16.8%, 6.7%, 8.6% and 4.8% for P1, P2, P3 and total attendances, respectively. The models were able to account for most of the significant autocorrelations present in the data. Conclusion: Time series analysis has been shown to provide a useful, readily available tool for predicting emergency department workload that can be used for staff rostering and resource planning.
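
    A hedged sketch of the modelling approach described above: a seasonal ARIMA fit to daily attendance counts with public holiday, air quality (PSI), temperature and humidity as exogenous regressors, evaluated by MAPE on a hold-out window. The data are simulated and the (1,0,1)x(1,0,1,7) order is an assumption, not the specification fitted in the paper.

```python
# Seasonal ARIMA with exogenous regressors for daily ED attendances (simulated data).
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
days = pd.date_range("2005-07-01", periods=700, freq="D")
exog = pd.DataFrame({
    "holiday": (rng.random(700) < 0.03).astype(int),
    "psi_gt_50": (rng.random(700) < 0.2).astype(int),
    "temperature": 27 + 3 * rng.standard_normal(700),
    "humidity": 80 + 5 * rng.standard_normal(700),
}, index=days)
weekly = 20 * np.sin(2 * np.pi * days.dayofweek / 7)
y = pd.Series(300 + weekly - 15 * exog["holiday"] + rng.normal(0, 10, 700), index=days)

train, test = slice(None, 600), slice(600, None)
model = SARIMAX(y.iloc[train], exog=exog.iloc[train],
                order=(1, 0, 1), seasonal_order=(1, 0, 1, 7)).fit(disp=False)
forecast = model.forecast(steps=100, exog=exog.iloc[test])

# Mean absolute percentage error on the 100-day hold-out window.
mape = np.mean(np.abs((y.iloc[test].values - forecast.values) / y.iloc[test].values)) * 100
print(f"hold-out MAPE: {mape:.1f}%")
```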

    Recent changes of water discharge and sediment load in the Yellow River basin, China

    The Yellow River basin contributes approximately 6% of the sediment load from all river systems globally, and its annual runoff directly supports 12% of the Chinese population. As a result, describing and understanding recent variations of water discharge and sediment load under global change scenarios are of considerable importance. The present study considers the annual hydrologic series of the water discharge and sediment load of the Yellow River basin obtained from 15 gauging stations (10 mainstream, 5 tributaries). The Mann-Kendall test method was adopted to detect both gradual and abrupt changes in the hydrological series since the 1950s. With the exception of the area draining to the upper Tangnaihai station, results indicate that both water discharge and sediment load have decreased significantly (p < 0.05). The declining trend is greater with distance downstream, and drainage area has a significant positive effect on the rate of decline. It is suggested that the abrupt change in water discharge from the late 1980s to the early 1990s arose from human extraction, and that the abrupt change in sediment load was linked to disturbance from reservoir construction.
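
    A minimal implementation sketch of the Mann-Kendall test used to detect the gradual trends reported above: the S statistic counts concordant minus discordant pairs in time order and is normalised to a Z score (continuity-corrected, with no tie correction for brevity). The annual series below is simulated, not the Yellow River record.

```python
# Mann-Kendall trend test on a simulated annual discharge series.
import math
import random

def mann_kendall_z(series):
    """Return the Mann-Kendall Z statistic; |Z| > 1.96 indicates a trend at p < 0.05."""
    n = len(series)
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1) for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        return (s - 1) / math.sqrt(var_s)
    if s < 0:
        return (s + 1) / math.sqrt(var_s)
    return 0.0

random.seed(0)
# Simulated annual water discharge with a gradual decline plus noise.
years = range(1950, 2006)
discharge = [60 - 0.4 * (y - 1950) + random.gauss(0, 5) for y in years]

z = mann_kendall_z(discharge)
print(f"Mann-Kendall Z = {z:.2f} "
      f"({'significant decline' if z < -1.96 else 'no significant trend'})")
```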
